Linkerd

Linkerd is an open-source service mesh for Kubernetes, built around a lightweight network proxy. It makes running services easier and safer by giving you runtime debugging, observability, reliability, and security, all without requiring any changes to your application code. Linkerd was designed to solve issues related to the operation and management of large applications and systems. Interaction between services is a critical component of an application's runtime behavior, so by providing a layer of abstraction to control this communication, Linkerd gives developers more visibility and reliability. Without a dedicated layer of control, it can be difficult to measure and diagnose application problems and failures.

What Exactly is a Service Mesh?

A service mesh is a dedicated infrastructure layer that controls service-to-service communication, allowing the separate parts of an application to communicate with each other. It is not a “mesh of services” but rather a mesh of API proxies that services call locally. The proxies then relay each call to its intended target service, thus abstracting away all aspects related to the network. By intercepting network communication within the application, service meshes are able to extract metrics, apply service-to-service policies and encrypt the exchange. Service meshes are typically used in cloud-based applications, containers and microservices.

Why Use Linkerd or Any Service Mesh?

Service meshes are a state-of-the-art solution to make service-to-service communication resilient, observable and secure across an entire application. They promise that, by pushing all network communication into a dedicated infrastructure layer, the network with all its intrinsic failure modes can be encapsulated away, thus freeing the developer from the arduous and error-prone task of handling them in their application code. They are best positioned to deliver on this promise when used with stand-alone, moderately scaled microservice applications that run on Kubernetes.

How Linkerd Works

Linkerd runs as a standalone proxy, does not rely on specific languages or libraries, and can be used with containerized and microservice-based applications. The two common deployment models for Linkerd are per-host and as a sidecar.

With per-host deployment, one Linkerd instance is attached to each physical or virtual host, and all traffic from the application service instances on that host is routed through it.

Deploying Linkerd as a sidecar allows one Linkerd instance to be installed per instance of every application service. This is useful for container-based applications. For example, Linkerd can be used in a microservice application that uses Docker containers or Kubernetes pods.
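
As a concrete sketch of the sidecar model with the Linkerd 2.x CLI and kubectl (the namespace and manifest names here are hypothetical), a namespace can be annotated so that Linkerd's proxy injector adds the sidecar to every pod created in it:

    # Opt a namespace in to automatic proxy injection; linkerd.io/inject is Linkerd's documented annotation.
    kubectl create namespace demo-apps
    kubectl annotate namespace demo-apps linkerd.io/inject=enabled
    # Pods created in this namespace from now on get the linkerd-proxy sidecar added automatically.
    kubectl -n demo-apps apply -f my-service.yaml

Individual workloads can also opt in or out by setting the same annotation on their pod template.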

Linkerd can communicate with application services using one of three configurations:

  1. service-to-linker: Each service instance routes traffic through its corresponding Linkerd instance which then handles further traffic rules.
  2. linker-to-service: Linkerd sidecars receive incoming traffic and route it to the corresponding service instance, rather than the service instance receiving traffic directly.
  3. linker-to-linker: A combination of the two: incoming traffic is received by the Linkerd instance, which routes it to the corresponding service instance, and the service instance then sends its outgoing traffic back out through its Linkerd instance.

Linkerd Architecture

Like all service meshes, Linkerd consists of a control plane and a data plane. The control plane is made up of a main controller component, a web component serving the user dashboard and a metrics component consisting of a modified Prometheus and Grafana. These components control the proxy configurations across the service mesh and process relevant metrics. The data plane consists of the interconnected Linkerd proxies themselves, which are typically deployed as “sidecars” into each service container.

The Linkerd Data Plane

Since the proxy components that make up Linkerd’s data plane are typically injected via the command line, adding a service to Linkerd is straightforward: deploy the service and inject the proxy component as a sidecar. Once the service is equipped with a proxy component, it immediately becomes part of the service mesh.
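
For example, with the Linkerd 2.x CLI this injection is a single pipeline (the deployment name is hypothetical, and exact behavior varies slightly between releases):

    # Read the existing manifest, let the CLI add the sidecar proxy, and re-apply the result.
    kubectl get deploy my-service -o yaml \
      | linkerd inject - \
      | kubectl apply -f -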

Linkerd’s proxy component supports the following features:

  • Transparently proxy HTTP, HTTP/2, TCP and WebSocket traffic,
  • Gather telemetry for HTTP and TCP traffic,
  • Provide latency-aware Layer 7 load balancing for HTTP traffic and Layer 4 load balancing for non-HTTP traffic,
  • Automatically secure communication through TLS encryption (although currently labeled “experimental”) and
  • Diagnose failures via a nifty “tap” facility that inspects live calls (see the sketch after this list).
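
As a sketch of the “tap” facility from the last bullet, the CLI can stream metadata about live requests flowing through a workload's proxies; the deployment name is hypothetical, and on newer releases tap ships with the viz extension:

    # Watch live request metadata for a deployment's pods (newer Linkerd 2.x releases).
    linkerd viz tap deploy/my-service
    # Equivalent core command on older Linkerd 2.x releases:
    # linkerd tap deploy/my-service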

The Linkerd Control Plane

Linkerd’s control plane provides APIs and a user-facing web console, is responsible for proxy configuration, and collects and aggregates data plane metrics. Because Linkerd’s control plane containers come with an instance of Linkerd’s proxy pre-installed, they are automatically part of the service mesh and can be controlled just like any other service in the mesh; the same is not true of every other service mesh.

Its three main components are:

  • The controller component, a collection of containers implementing the public controller API, the proxy API and other internal functionality.
  • The web component, implementing the end-user UI.
  • A metrics component consisting of a modified Prometheus that is typically coupled with a Grafana instance to provide visualization dashboards.
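
To make this less abstract, here is a minimal installation sketch using the Linkerd 2.x CLI; exact commands vary by release (in newer versions the dashboard and the Prometheus/Grafana metrics stack are packaged as the separate viz extension):

    # Render the control-plane manifests, apply them to the cluster, then verify the installation.
    linkerd install | kubectl apply -f -
    linkerd check
    # In newer releases, install the web dashboard and metrics stack separately and open the dashboard.
    linkerd viz install | kubectl apply -f -
    linkerd viz dashboard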

Benefits of Linkerd

  • Simplifies communication between services in both microservices and containers.
  • Makes the process of documenting how parts of an application interact more manageable.
  • Provides common service mesh features such as latency-aware load balancing, service discovery, tracing and instrumentation.
  • Allows teams to choose the programming language that is most appropriate for each service.
  • Offers higher visibility and control through decoupling communication from the main application code.
  • Gives application managers the ability to address communication and mechanics problems without changing the application itself.
  • Keeps application code leaner and easier to scale by moving communication concerns out of it.

Conclusion

Service meshes like Linkerd are powerful technologies to “make service-to-service connections safe, fast and reliable.” They are also opinionated, complex and can have a significant effect on overall performance. Adopting one will therefore most likely succeed in uniform environments such as self-contained microservice architectures that run on Kubernetes.

Compared to other service meshes, Linkerd has the advantage that it can be deployed per host as well as per container, which makes it a more flexible choice in mixed environments that run virtual machines alongside containers, or where sidecar injection is otherwise not an option.